Last Update: 2025/3/26
SenseFlow Chat API
The SenseFlow Chat API allows you to build conversational AI applications with context awareness. This is suitable for chatbots, virtual assistants, and other applications requiring conversation history.
Endpoints
Chat Message
POST https://platform.llmprovider.ai/v1/agent/chat-messages
Request Headers
Header | Value |
---|---|
Authorization | Bearer YOUR_API_KEY |
Content-Type | application/json |
Request Body
Parameter | Type | Description |
---|---|---|
model | string | Agent name |
query | string | User input/question content |
inputs | object | (Optional) Key-value pairs of variables defined in the app |
files | array | (Optional) Array of file objects for multimodal interactions |
response_mode | string | Response mode: streaming (recommended) or blocking |
conversation_id | string | (Optional) Conversation ID for continuing previous chats |
user | string | Unique identifier for the end user |
Files Object Structure
Field | Type | Description |
---|---|---|
type | string | File type: image, document, audio, video, or custom |
transfer_method | string | remote_url or local_file |
url | string | File URL (when transfer_method is remote_url ) |
upload_file_id | string | File ID (when transfer_method is local_file ) |
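For reference, a files array combining both transfer methods might look like the following (the URL and upload_file_id values are placeholders):
"files": [
  {
    "type": "image",
    "transfer_method": "remote_url",
    "url": "https://example.com/image.jpg"
  },
  {
    "type": "image",
    "transfer_method": "local_file",
    "upload_file_id": "72fa9618-8f89-4a37-9b33-7e1178a24a67"
  }
]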
Response
The response varies based on the response_mode:
- For blocking mode: returns a ChatResponse object
- For streaming mode: returns a stream of ChunkChatResponse objects
ChatResponse Structure
Field | Type | Description |
---|---|---|
id | string | Message ID |
conversation_id | string | Conversation ID |
answer | string | Complete response content |
message_files | array | Array of message files (for assistant responses) |
metadata | object | Metadata information |
created_at | integer | Message creation timestamp |
Example Response (Blocking Mode)
{
"id": "msg_12345",
"conversation_id": "conv_12345",
"answer": "Based on the image, I can see...",
"message_files": [],
"metadata": {},
"created_at": 1705569239
}
Streaming Response Events
Each streaming chunk starts with data: and chunks are separated by \n\n. The events follow the same structure as the Completion API. Example:
data: {"event": "message", "task_id": "900bbd43-dc0b-4383-a372-aa6e6c414227", "message_id": "663c5084", "answer": "Hi", "created_at": 1705398420}\n\n
Different event types in the stream:
Event Type | Description | Fields |
---|---|---|
message | Text chunk from the LLM | task_id, message_id, answer, created_at |
message_file | A new file created by a tool | id, type, belongs_to, url, conversation_id |
message_end | Streaming has ended | task_id, message_id, metadata, usage, retriever_resources |
tts_message | Speech synthesis chunk | task_id, message_id, audio (base64), created_at |
tts_message_end | Speech synthesis completion | task_id, message_id, audio (empty), created_at |
message_replace | Content moderation replacement | task_id, message_id, answer, created_at |
workflow_started | Workflow execution started | task_id, workflow_run_id, event, data (id, workflow_id, sequence_number, created_at) |
node_started | Node execution started | task_id, workflow_run_id, event, data (id, node_id, node_type, title, index, predecessor_node_id, inputs, created_at) |
node_finished | Node execution ended | task_id, workflow_run_id, event, data (id, node_id, node_type, title, index, predecessor_node_id, inputs, created_at) |
workflow_finished | Workflow execution ended | task_id, workflow_run_id, event, data (id, workflow_id, status, outputs, error, elapsed_time, total_tokens, total_steps, created_at, finished_at) |
error | Stream error event | task_id, message_id, status, code, message |
ping | Keep-alive ping (every 10s) | - |
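As a minimal consumption sketch in Python (assuming the requests library; only the documented message, message_end, and error events are handled, and the empty model value is a placeholder for your agent name):
import json
import requests

api_key = 'YOUR_API_KEY'
url = 'https://platform.llmprovider.ai/v1/agent/chat-messages'
headers = {
    'Authorization': f'Bearer {api_key}',
    'Content-Type': 'application/json'
}
data = {
    'model': '',  # placeholder: your agent name
    'query': 'Hello',
    'response_mode': 'streaming',
    'user': 'abc-123'
}

# stream=True keeps the connection open so chunks can be read as they arrive
with requests.post(url, headers=headers, json=data, stream=True) as response:
    for line in response.iter_lines(decode_unicode=True):
        if not line or not line.startswith('data:'):
            continue  # skip blank separator lines and non-data frames
        chunk = json.loads(line[len('data:'):].strip())
        event = chunk.get('event')
        if event == 'message':
            print(chunk['answer'], end='', flush=True)
        elif event == 'message_end':
            break  # streaming has ended
        elif event == 'error':
            raise RuntimeError(chunk.get('message'))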
Example Request
- Shell
- Python
- Node.js
curl -X POST 'https://platform.llmprovider.ai/v1/agent/chat-messages' \
--header "Authorization: Bearer $YOUR_API_KEY" \
--header 'Content-Type: application/json' \
--data-raw '{
"model": ""
"query": "What can you tell me about this image?",
"files": [
{
"type": "image",
"transfer_method": "remote_url",
"url": "https://example.com/image.jpg"
}
],
"response_mode": "streaming",
"conversation_id": "conv_12345",
"user": "abc-123"
}'
import requests
import json
api_key = 'YOUR_API_KEY'
url = 'https://platform.llmprovider.ai/v1/agent/chat-messages'
headers = {
'Authorization': f'Bearer {api_key}',
'Content-Type': 'application/json'
}
data = {
'model': '',
'query': 'What can you tell me about this image?',
'files': [{
'type': 'image',
'transfer_method': 'remote_url',
'url': 'https://example.com/image.jpg'
}],
'response_mode': 'streaming',
'conversation_id': 'conv_12345',
'user': 'abc-123'
}
response = requests.post(url, headers=headers, json=data)
print(response.json())
const axios = require('axios');
const apiKey = 'YOUR_API_KEY';
const url = 'https://platform.llmprovider.ai/v1/agent/chat-messages';
const data = {
model: '',
query: 'What can you tell me about this image?',
files: [{
type: 'image',
transfer_method: 'remote_url',
url: 'https://example.com/image.jpg'
}],
response_mode: 'streaming',
conversation_id: 'conv_12345',
user: 'abc-123'
};
const headers = {
'Authorization': `Bearer ${apiKey}`,
'Content-Type': 'application/json'
};
axios.post(url, data, {headers})
.then(response => console.log(response.data))
.catch(error => console.error(error));
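Since conversation_id links messages into one conversation, a follow-up turn can reuse the ID returned by the first response. A minimal Python sketch (blocking mode for brevity; the empty model value is a placeholder for your agent name):
import requests

api_key = 'YOUR_API_KEY'
url = 'https://platform.llmprovider.ai/v1/agent/chat-messages'
headers = {
    'Authorization': f'Bearer {api_key}',
    'Content-Type': 'application/json'
}

# First turn: omit conversation_id so the server starts a new conversation
first = requests.post(url, headers=headers, json={
    'model': '',  # placeholder: your agent name
    'query': 'My name is Ada.',
    'response_mode': 'blocking',
    'user': 'abc-123'
}).json()

# Follow-up turn: pass the returned conversation_id to keep the context
second = requests.post(url, headers=headers, json={
    'model': '',
    'query': 'What is my name?',
    'response_mode': 'blocking',
    'conversation_id': first['conversation_id'],
    'user': 'abc-123'
}).json()
print(second['answer'])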
Upload File
POST https://platform.llmprovider.ai/v1/agent/files/upload
Upload files (currently only supports images) for use in multimodal interactions. Supports png, jpg, jpeg, webp, and gif formats.
Request Headers
Header | Value |
---|---|
Authorization | Bearer YOUR_API_KEY |
Content-Type | multipart/form-data |
Request Body Parameters
Parameter | Type | Description |
---|---|---|
model | string | Agent name |
file | file | The file to upload |
user | string | User identifier (must match the message API user ID) |
Response
Field | Type | Description |
---|---|---|
id | uuid | File ID |
name | string | File name |
size | integer | File size in bytes |
extension | string | File extension |
mime_type | string | File MIME type |
created_by | uuid | Uploader ID |
created_at | timestamp | Upload timestamp |
Example Response
{
"id": "72fa9618-8f89-4a37-9b33-7e1178a24a67",
"name": "example.png",
"size": 1024,
"extension": "png",
"mime_type": "image/png",
"created_by": 123,
"created_at": 1577836800
}
Example Request
- Shell
- Python
- Node.js
curl -X POST 'https://platform.llmprovider.ai/v1/agent/files/upload' \
--header "Authorization: Bearer $YOUR_API_KEY" \
--form 'file=@localfile;type=image/[png|jpeg|jpg|webp|gif]' \
--form 'model=' \
--form 'user=abc-123'
import requests
import json
api_key = 'YOUR_API_KEY'
url = 'https://platform.llmprovider.ai/v1/agent/files/upload'
# Let requests set the multipart Content-Type (with boundary) automatically
headers = {
    'Authorization': f'Bearer {api_key}'
}
files = {
    'model': (None, ''),
    'file': ('image.jpg', open('localfile', 'rb'), 'image/jpeg'),
    'user': (None, 'abc-123')
}
response = requests.post(url, headers=headers, files=files)
print(response.json())
const FormData = require('form-data');
const fs = require('fs');
const axios = require('axios');
const apiKey = 'YOUR_API_KEY';
const url = 'https://platform.llmprovider.ai/v1/agent/files/upload';
const form = new FormData();
form.append('file', fs.createReadStream('localfile'));
form.append('user', 'abc-123');
form.append('model', '');
const headers = {
...form.getHeaders(),
'Authorization': `Bearer ${apiKey}`
};
axios.post(url, form, {headers})
.then(response => console.log(response.data))
.catch(error => console.error(error));
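To reference an uploaded file in a chat message, pass the returned id as upload_file_id with transfer_method set to local_file. A sketch in Python (the empty model value is again a placeholder; note the user must match across both calls):
import requests

api_key = 'YOUR_API_KEY'
base_url = 'https://platform.llmprovider.ai/v1/agent'
auth_header = {'Authorization': f'Bearer {api_key}'}

# Step 1: upload the image; requests builds the multipart body itself
upload = requests.post(f'{base_url}/files/upload', headers=auth_header, files={
    'model': (None, ''),  # placeholder: your agent name
    'file': ('image.png', open('image.png', 'rb'), 'image/png'),
    'user': (None, 'abc-123')
}).json()

# Step 2: reference the uploaded file by its ID in a chat message
chat = requests.post(f'{base_url}/chat-messages', headers=auth_header, json={
    'model': '',
    'query': 'What can you tell me about this image?',
    'files': [{
        'type': 'image',
        'transfer_method': 'local_file',
        'upload_file_id': upload['id']
    }],
    'response_mode': 'blocking',
    'user': 'abc-123'  # must match the user ID used for the upload
}).json()
print(chat['answer'])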
Stop Response
POST https://platform.llmprovider.ai/v1/agent/chat-messages/:task_id/stop
Stop a streaming response. Only available for streaming mode.
Request Headers
Header | Value |
---|---|
Authorization | Bearer YOUR_API_KEY |
Content-Type | application/json |
Path Parameters
Parameter | Type | Description |
---|---|---|
task_id | string | Task ID obtained from the streaming response |
Request Body Parameters
Parameter | Type | Required | Description |
---|---|---|---|
user | string | Yes | User identifier (must match the message API user ID) |
model | string | Yes | Agent name |
Response
Field | Type | Description |
---|---|---|
result | string | Operation result ("success" on success) |
Example Response
{
"result": "success"
}
Example Request
- Shell
- Python
- Node.js
curl -X POST 'https://platform.llmprovider.ai/v1/agent/chat-messages/task_123/stop' \
--header "Authorization: Bearer $YOUR_API_KEY" \
--header 'Content-Type: application/json' \
--data-raw '{
"agent": "",
"user": "abc-123"
}'
import requests
import json
api_key = 'YOUR_API_KEY'
url = 'https://platform.llmprovider.ai/v1/agent/chat-messages/task_123/stop'
headers = {
'Authorization': f'Bearer {api_key}',
'Content-Type': 'application/json'
}
data = {
'model': '',
'user': 'abc-123'
}
response = requests.post(url, headers=headers, json=data)
print(response.json())
const axios = require('axios');
const apiKey = 'YOUR_API_KEY';
const url = 'https://platform.llmprovider.ai/v1/agent/chat-messages/task_123/stop';
const data = {
model: '',
user: 'abc-123'
};
const headers = {
'Authorization': `Bearer ${apiKey}`,
'Content-Type': 'application/json'
};
axios.post(url, data, {headers})
.then(response => console.log(response.data))
.catch(error => console.error(error));
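Putting the pieces together: every streaming chunk carries a task_id, so a client can cancel mid-stream as soon as the first chunk arrives. A minimal Python sketch (same placeholder model value as above):
import json
import requests

api_key = 'YOUR_API_KEY'
base_url = 'https://platform.llmprovider.ai/v1/agent'
headers = {
    'Authorization': f'Bearer {api_key}',
    'Content-Type': 'application/json'
}

with requests.post(f'{base_url}/chat-messages', headers=headers, json={
    'model': '',  # placeholder: your agent name
    'query': 'Write a very long story.',
    'response_mode': 'streaming',
    'user': 'abc-123'
}, stream=True) as response:
    for line in response.iter_lines(decode_unicode=True):
        if not line or not line.startswith('data:'):
            continue
        chunk = json.loads(line[len('data:'):].strip())
        task_id = chunk.get('task_id')
        if task_id:
            # Cancel the task as soon as a chunk reveals its task_id
            stop = requests.post(f'{base_url}/chat-messages/{task_id}/stop',
                                 headers=headers,
                                 json={'model': '', 'user': 'abc-123'}).json()
            print(stop)  # expected: {"result": "success"}
            break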